Innovation, Meet Intention: Using AI in Canvas to Elevate Teaching, Learning, and Administration


Join us to learn more about Instructure's thoughtful, safety-first approach to AI in education. Discover how our open and extensible platform empowers informed choices by surfacing the "nutrition facts" on third-party AI tools, while also offering new native functionality like discussion summaries, translation, and more. Don't miss out on the opportunity to hear directly from our team as well as customers and partners in our AI ed-cosystem.

Video Transcript
Thank you for joining us at nine AM on the final day of InstructureCon. We appreciate you making it here. Yay. Hopefully, you had fun last night. I learned to play craps. It was, somewhat productive.

If you have a chance to play craps with Zach Pendleton, our chief architect, he's a ringer. I don't know how he knows how to play it, but, thank you so much. So this session is Innovation, Meet Intention, with AI. And we're gonna be diving a little bit into, you know, some of the features that you've seen previously, as well as the experience with Praxis, and Felice is gonna share her experience as well. So, my name is Ryan Lufkin.

I'm the vice president of academic strategy here at Instructure, and I'm gonna have my fellow panelists introduce themselves. Alright. Oh, my mic is hot. Good.

I'm Christy Lindell, VP of product for an area we call EdTech Effectiveness. We are lucky enough to work across our learning network, to work with institutions, to help you all manage your EdTech ecosystems, and also to work with our valued partners to bring products in a really seamless way into our ecosystem. So really excited to talk about AI this morning. Hello. David James Clark the fourth.

That's a mouthful. CEO of Praxis AI, and I'm very, honored and proud to be her vendor. Thank you, David. And Ryan, I can tell you were playing craps last night there. My name is Felice Banner.

I'm the director of e-learning at Champlain College Online. We are associated with a traditional campus, but the online school offers over forty degree programs, certificate programs, graduate and undergraduate. The college has been around online since the eighties, and I've been working in this space since the nineties, pushing that boulder uphill. I am one of the pioneers in online learning and proud to say that at this point in my life. Yeah.

So we're gonna walk through, this is our quick agenda. We're gonna recap some of what you saw yesterday, and some things you may not have seen yesterday as well. We're actually gonna talk a little bit about building trust with AI tools and why that's so important across your campuses. And then you're gonna hear from our partners, Praxis AI and Champlain College, around their experience and some of the interesting stuff they've been doing. But, you know, you've seen this before.

You saw it last year. Intentional, safe, equitable. You know, we have very clear guiding principles as we develop our features in AI, but also as we work with our partners. We're choosing the best, most trustworthy partners in the market to actually move forward with. And I think that's incredibly important.

You know, last year, somebody said, you know, I have a trust relationship with you, but I don't necessarily have a trust relationship with the guy standing behind you. And so we wanted to take that very seriously and build that trust, so that then you can pass that trust on to your students and educators. And, you know, yesterday, you saw Smart Search. This seems like one of those simple features, but if you've ever had a bad search experience, you know how frustrating that can be. And AI really does help, because it understands context.

It understands more of the search mentality. It's not just looking for keywords. It really is incredibly compelling. Discussion summaries is one we hear over and over again from educators will just save so much time, and you'll start seeing more of that. And then inbox and discussion translation.

You know, translation is a hard tool, a hard feature to roll out, because there are so many different use cases and so much granularity of control. So this is really exciting, to be able to roll this out now and, you know, look forward to more of that in the future. Hey, Ryan. Yeah. We should also, just as a reminder, talk about the fact that there was a lot of enthusiasm about the ask-my-data-anything as part of the new Intelligent Insights.

Yes. And, actually, one of the things I love about that is when I demonstrated it down at Texas A&M a couple of weeks ago, what they were so excited about is it didn't just do the search. It generated the SQL, so a novice user could just search, and a more advanced user could cut and paste the SQL, modify it, and play with it.
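The flow described above, a plain-English question producing SQL that the user can inspect, copy, and edit, can be sketched roughly as below. Everything here is invented for illustration: the schema, the prompt wording, and the stubbed model response standing in for a real LLM call.

```python
# Hypothetical sketch of an "ask your data" flow: a natural-language question
# is turned into SQL that is surfaced to the user, so novices get answers and
# advanced users can copy and tweak the query. Schema and names are invented.

SCHEMA = """
CREATE TABLE submissions (
    student_id   INTEGER,
    course_id    INTEGER,
    score        REAL,
    submitted_at TEXT
);
"""

def build_prompt(question: str) -> str:
    """Assemble the prompt an LLM would receive: schema plus the question."""
    return (
        "You are a SQL assistant. Given this schema:\n"
        f"{SCHEMA}\n"
        f"Write a single SQL query answering: {question}\n"
        "Return only SQL."
    )

def generate_sql(question: str) -> str:
    """Stub standing in for the model call; a real system would send
    build_prompt(question) to an LLM and return its completion."""
    # Hard-coded demo answer so the sketch runs offline.
    return (
        "SELECT course_id, AVG(score) AS avg_score\n"
        "FROM submissions\n"
        "GROUP BY course_id;"
    )

def ask_my_data(question: str) -> dict:
    """Return both the generated SQL and the prompt used, so a UI can show
    the SQL for advanced users to copy and modify."""
    sql = generate_sql(question)
    return {"question": question, "sql": sql, "prompt": build_prompt(question)}

result = ask_my_data("What is the average score per course?")
print(result["sql"])
```

The design point worth noting is that the SQL is part of the result, not hidden behind it: surfacing the query keeps the feature useful to both audiences Ryan mentions.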

So just a very thoughtful approach from the product team, I think, on that, that people have been pretty excited about. If one thing has become clear, I think we wanna make sure that everybody that leaves this conference understands that we have a very, very clear approach to AI, and it's one about choice. Right? There are really three prongs to that. One is, as a SaaS solution, we're gonna develop the features that make sense for educators and students that we can put into the product without raising the cost of the product. You know, there's a limitation on that.

And we talked a little bit about, you know, cost being one of the hidden factors of AI that kinda came late to the discussion. Right? As we started rolling out some of these solutions, we started realizing the cost was too much. Right? And so we're gonna do what makes sense for most of our users and limit it to that. Right? The second part is we're gonna really expand the plumbing within Canvas and the other Instructure solutions to offer third-party LTI integrations and open APIs. You know, Christy's gonna talk a little bit more about that, but hopefully you've heard about that in some of the sessions as well.

And then the third is, if you wanna stand up your own large language models and leverage those same pathways, you can do that. And we've seen schools like the University of Central Florida do that. And increasingly, you can work with AWS and stand up one of their large language models within the Bedrock framework. It really is an incredible approach. It's a thoughtful approach.

You know, Zach Pendleton, who I've worked with a lot, is, I think, maybe standing in the back there. He spent a lot of time actually, like, going through this. We didn't wanna be first to market. We wanted to be very deliberate. And so I think this is really resonating.

Hopefully, you find that as well. I think one thing that's interesting right now is, you know, there's a slide I like to show that has three years: nineteen eighty-two, eighty-three, eighty-four. And if you survived the eighties like I did, then you remember the movies Blade Runner, WarGames, and Terminator. Right? We were conditioned to fear AI. We were taught that AI is evil.

And so when it got here, of course, you were scared of it. Right? It's one of those things where, naturally, when you're conditioned to something, you respond that way. And so there's a lot of overcoming the fear and the anxiety around AI. You know, the stats that I think Sharon showed yesterday around AI adoption, I think, were maybe even a little optimistic. I don't know that as many users, as many educators, are actually adopting this.

There are still so many that are fearful, and it's very different from K-12 to higher education. My daughter, a freshman at the University of Utah, just finished her freshman year. She's got professors that are building AI into the process. They're really making it part of the process, like getting notes on paper, really summarizing complex topics, things like that, building it in so students understand how to ethically use it properly.

My son, who's in junior high, his educators are still kind of hung up on the fact that, to them, it's cheating. They don't want students using it, and they're not being taught it. I kind of fear we're gonna have this wave of students hit higher ed that don't know the ethics of using AI properly, and we'll see how that goes. But we need to fear the right things. Then, when we fear the right things, we can solve the right problems.

And there are some valid fears. Right? Student information, in our industry and what we do, protecting student information is top priority. Right? But also educator intellectual property, right, not having that pulled in. I think everybody may have heard the story of the Samsung engineer who decided to test some code in ChatGPT, uploaded it to the public version of ChatGPT. Well, now that's ingested into the large language model and is no longer protectable.

Right? I don't think they work at Samsung anymore. But it's that protection for intellectual property we need to make sure we're making a priority. One thing I actually do fear a lot is the deepfake. You know, you've seen this with Tom Cruise and some of the deepfakes, the AI-generated images and audio. You know, we've already seen that in New Hampshire's primary election.

There were, you know, robocalls with the president's voice, and it's actually changed some legislation. The deepfake imagery is really powerful, and I think one of the things we need to make sure we're doing is focusing on AI literacy to prevent that in the future. Those that are in the know and understand what AI is capable of can identify it; many, many can't, and we've gotta focus on that. Obviously, bias is an ongoing discussion, how we train bias out. You heard that in the keynote yesterday: if you put in, you know, generate a photo of a CEO of an American company, you will get a white male.

Right? And it's because there's bias built into the data that that model is trained on. How do we avoid that? How do we not perpetuate that? Obviously, we need to make sure students are using it productively and understand it, and that we're not creating a digital divide, because these products cost money. On that last point, you know, I think when we originally got excited about AI, we were like, you can do so many things. But when you roll this out at scale across a lot of institutions, it becomes very expensive. And we need to make sure that we are building in the cost and not creating a situation where some students can afford it and some students can't.

And you hear a lot of calls for increased regulation around AI, especially last year. I think some of it's calmed down a little bit. You know, we were banning AI last year. People were saying it has to be regulated. But we have regulations in place, right, around data security, data privacy, accessibility.

We have these guardrails that we already conform to on a daily basis. What we need to make sure we do is eat our vegetables. Right? This is a term that I think Zach Pendleton came up with. But it's this idea that, as we apply these AI models, we've got the guardrails in place. Let's make sure that we are aligned to those guardrails.

And so we came up with the metaphor of a nutrition facts card. Right? As you go to the grocery store, you can pick up a cereal and look at, you know, Cocoa Puffs versus Frosted Flakes and understand the differences in calories. And so our product team and our partner team have worked on this metaphor, really creating transparency, building trust through as much visibility as we can provide. You know, ultimately, AI models are a bit of a black box. We don't understand what happens when data goes into that box and what comes out.

But we can build trust about, you know, where they're hosted, what model is being used, what's being ingested, what's not. What is the desired outcome? Be very intentional with the outcomes, and be clear on that. And so this is something that you'll see rolled out, and Christy is actually gonna show us a little bit more about how that's put into place. But I think building trust, you know, from you to us, building trust with our customers, building trust with end users, is incredibly important. So you'll see a lot of that kind of underlying everything we do today.

Alright. Thanks, Ryan. And speaking of healthy choices, I don't know that we've all made super healthy choices this week. I know I may not have. Yeah.

And we wanna ensure, with that eat-your-vegetables idea, that we have a healthy ecosystem. And so the nutrition fact labels are an attempt to help you make evidence-based decisions that are informed, and kind of take the mystery out of the black box. There were a couple of things on the nutrition facts that did come in from feedback directly, in addition to what Ryan's talking about: how the models were built, what data might be shared, and ensuring data privacy understanding. Also, a lot of conversation that came up with our customers over the last few months was, is there a human in the loop? What kind of control or insight do my instructors especially have? And in addition, just ensuring, can I turn this on or turn this off? Do I have control? Do I have choice? That's a common need. And that's something that early on we decided, look.

We're gonna make sure there's always a human in the loop. You know, AI is never the final step. Right? It is early in the process. There's always a human checking that box and accepting it. And I think your point is really valid. We rolled out the first version of this right before ASU GSV, and we got feedback from our partners, from our AI advisory council, that is, customers just like you, who actually said, you know what? I wanna see this.

I wanna see more of this. Drill into here. And so this has evolved over time, and I love where we've ended up. Yeah. David here filled out our very first form and gave me plenty of feedback.

Thank you very much. So you can find these fact labels in a couple of ways. So for our own products, for the features and functionality that Ryan talked about, these are all in the community. And so you'll find those fact labels. In fact, just thinking of it right now, we should probably put our partners' fact labels in the community as well.

However, you can find those labels currently in a thing that we launched last year called the AI emerging marketplace. It was intended to be a way for you to discover. It's just a URL that apparently is not on the slide, but we can get it to you. Actually, if you Google it, it pops right up. Go Google Instructure AI.

It pops right up. Yeah. So what we did was showcase a handful of our partners, especially those early innovators like Praxis, because we wanted to make those tools known and visible to you. So that's the first place where we are including these nutrition facts for our partners' tools. And you can see an example here of one that David completed.

And speaking of the marketplace, this is generally what it looks like. You can click in and see information about those tools. What we want to do, though, and what we're planning to do, is actually release an Instructure EdTech Collective marketplace at the end of this month. And that's gonna look something like this, where it's going to include all of our partners' tools. It will eventually replace the partner listings in the community, and it will be the place where you can find our partners and our partners' products and the information that you need to make healthy, evidence-based, good decisions about what tools you wanna adopt.

If you've been around for a long time, you know the Edu App Center. You know this kind of general listing. We got a lot of feedback that people wanted more information, more vetting, more, you know, I wanna be able to learn more when I click through. And so I love that we've gotten here. If you've been around for a long time, you should be excited about this evolution as well.

Yeah. So you'll be hearing more about this. And then the other thing, probably one of the highlights of my day. Just as an aside, the highlight of my day this morning was running into Felice at the elevator. And I believe, I hope you don't mind that I share this, but Felice said this conference brought her back to life.

And so, I think that's pretty awesome. You know? We're here to be energized and to learn from each other. So thanks for sharing that. We were talking about that earlier, and I do leave this conference every year with so much energy, so excited to go back and, you know, do my work. I hope everybody leaves with that excitement.

If you haven't, come talk to us, and we'll make sure we zhuzh you up as much as possible. But really, I hope the content and the energy of this conference leave you, you know, as excited as we all are. It's incredibly productive. Okay. So in my head, I just have the SNL sketch.

We will pump you up. Yeah. We will pump you up. Thank you very much for that.

Yeah. Okay. So back to this. The highlight for me yesterday was watching a standing-room-only session, where two of our engineers and two of our product managers were talking about the new Canvas apps page. And this is the place you can go in Canvas to discover all of the LTI-integrated apps that are available for you.

You can get that information. It's not just the Edu App 1.1 tools; it'll be all of the tools. And our team has been working so, so hard to make sure that we have a globally unique ID for those apps. That enables so much sharing of information about these tools, to allow you to make the best decisions. But it also streamlines the install. I'm sure many of you have experienced the seventeen-plus-click experience to install LTI apps, copying and pasting and going to multiple places. We're moving to a one-click install for tools that are using dynamic registration.

You can discover your app, then you can go through the configuration. And it's this really nice in-process, modular wizard that allows you to really easily configure and control and install and manage those tools. And so we're so, so excited to see that improvement, and so were our customers. And then it also enables LTI reporting. So great.
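Under the hood, the one-click flow described above relies on the LTI 1.3 Dynamic Registration handshake, in which the tool sends the platform an OpenID Connect client registration request. Here is a rough sketch of what that request body might look like; the URLs, tool name, and claim values are invented, and the authoritative field list is in the 1EdTech Dynamic Registration specification.

```python
import json

# Sketch of an LTI 1.3 dynamic registration request body. The tool POSTs
# this JSON to the platform's registration endpoint; the platform responds
# with a client_id and deployment details, which is what makes the
# "one-click install" possible. All concrete values here are invented.

LTI_CONFIG_CLAIM = "https://purl.imsglobal.org/spec/lti-tool-configuration"

def build_registration_request(tool_name: str, base_url: str) -> dict:
    return {
        "application_type": "web",
        "grant_types": ["client_credentials", "implicit"],
        "response_types": ["id_token"],
        "client_name": tool_name,
        "redirect_uris": [f"{base_url}/lti/launch"],
        "initiate_login_uri": f"{base_url}/lti/login",
        "jwks_uri": f"{base_url}/.well-known/jwks.json",
        "token_endpoint_auth_method": "private_key_jwt",
        # LTI-specific configuration travels in this spec-defined claim.
        LTI_CONFIG_CLAIM: {
            "domain": base_url.split("//")[-1],
            "target_link_uri": f"{base_url}/lti/launch",
            "claims": ["iss", "sub", "name"],
            "messages": [{"type": "LtiResourceLinkRequest"}],
        },
    }

request_body = build_registration_request("Example Tutor", "https://tool.example.edu")
print(json.dumps(request_body, indent=2))
```

The point of the handshake is that the tool, not an admin copying keys between tabs, supplies its own configuration, which is why the seventeen-click manual flow collapses to one.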

I've got these tools installed, and now I want to know what's getting used. How do I improve adoption? How do I manage that? So that was a really fun thing to see, and we're really excited to make that available. So that's a little bit of what that looks like. I worked with Zach and Ryan and Deepa and a handful of other people to put together a view of what really was the foundation of Canvas. And I think the continued foundation of Instructure is to make sure that we have an open ecosystem and an integration framework for everyone to come together and build on the platform, to really make our teaching and learning experience seamless and robust.

So, you can kind of see some of the things that many of you already interact with and take advantage of. And what I wanted to point out is there are a couple of new things, as we learned about things like AI assistants or digital twins. To enable those tools, we really needed to look at, well, what are the placements that best support that in Canvas? What is the data that these tools need to be able to really provide the right insight for the user, whether that's a student or an instructor? And so we were really excited to make this available. It's yet to be named. You know, we had over thirty placements in Canvas already, but now we have thirty-one.

"Navigation" is not the best name, so we'll roll out something better. But we're thinking of it as a top navigation with a content drawer, sharing information about a page. What it really allows is for a student to not have to leave their learning environment: they see the content along with an AI assistant, so they can interact naturally with their assignments and get the help and the support that they need. And it has the right contextual information to provide really good, insightful support for the student. And then the Smart Search API.

I don't know. Zach's back there. Something about vectorized content and blah blah blah, Zach. But we're providing additional content and data through this API, again, to enable these tools to really work at the high quality level that you would expect.
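As a sketch of how a tool might call a course-scoped semantic search endpoint like the one described above: the snippet below only builds the request URL, and both the path and the parameter name are assumptions for illustration; a real integration should take the exact endpoint from the Canvas API documentation and send an Authorization bearer token.

```python
from urllib.parse import urlencode

# Minimal sketch of addressing a course-level semantic search endpoint.
# The "/smartsearch" path and "q" parameter are assumed, not confirmed;
# an actual call would add an "Authorization: Bearer <token>" header and
# parse the JSON results.

def smart_search_url(canvas_base: str, course_id: int, query: str) -> str:
    """Build the request URL for a course-scoped search query."""
    params = urlencode({"q": query})
    return f"{canvas_base}/api/v1/courses/{course_id}/smartsearch?{params}"

url = smart_search_url("https://school.instructure.com", 1234,
                       "photosynthesis basics")
print(url)
```

The interesting design property is that the heavy lifting (embedding and ranking the course content) stays on the platform side, so a third-party tool only needs a token and a query string.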

Over to David from Praxis AI, one of our valued partners. They have a digital twin that has been well tested and well used in the market, and it takes advantage of everything: it's an LTI 1.3 tool using dynamic registration, it's in the AI marketplace with a nutrition facts label, and it uses the Smart Search API and the new top navigation. All the boxes. So let's see what it looks like, and I'll turn it over to David. Thank you. Thank you.

And that's not all. It comes with a whole set of knives. Okay. So before we get to the demo, and this is gonna be exciting, I want Felice to talk more than me.

That was my goal this morning. But I wanted to share some best practices and some lessons learned from the last twenty months. We released our first generative AI teaching assistant at Clemson University in January of twenty twenty three, so two months after ChatGPT became available. And we're on our eleventh version of that technology, because we got hounded, as you can imagine, by students and faculty.

You know, feedback is a gift, and so we take that feedback very seriously. And when I look back at it, it was, you know, pretty barbaric in January of twenty twenty three. November thirtieth, twenty twenty two. My birthday. Yeah. So literally, there was a month.

Right? So to even have a functioning tool at that point is pretty impressive. Thank you. So why did we even build it in the first place? I mean, what problem are we solving? I think we can all agree that the number one most critical need of students is attention. How many times have you heard, I wish I had my own personal professor? Right? And on the professor's side, their most critical need is time, and those two are kind of at odds.

I wish I could clone myself. Right? I think you know where this is going. Until recently, cloning faculty in a digital way was science fiction. But now it's kinda science fact, in the sense that generative AI can help you build, at scale, personalized digital twins, as we call them, of your faculty, and deliver them in this really cool, amazing, open Canvas LMS. And we're gonna show you an example of that in just a moment: Professor Quirk at Champlain.

But the first thing I wanna do is share some of our experiences and best practices around what it looks like to put, quote unquote, ChatGPT into an education context. Because there's all sorts of stuff to fear. And talk about fearing the right things. So I'm gonna interrupt you here.

You know, David talked about how the faculty's critical resource is time. Right? And the students want attention. And, you know, for me, I'm also thinking cost and money. At Champlain College, we put a lot of money into advising. We pay for advising services, a tutoring service that's live, and our students use it very heavily. And there are specific courses where the students use it more and more and more.

It's one of the best things our institution does, how we support our students. So it's not just this faculty time, it's this advising time that really mattered to me, and that problem is a problem that I was looking to solve. Great. Thank you. So we'll start with trusted sources.

There are five things I'm gonna go through very quickly, and it kinda begins and ends with trust, and it begins and ends with where those answers are coming from. Right? I mean, you don't want them coming from X, or from the LLM data sets with that bias we talked about earlier that's built into the underlying data. Right? Exactly. And so the most trusted source of data is, probably number one, the faculty. Number two, the course materials.

Number three, there are other trusted sources out there: Google Scholar, PubMed, Wolfram Alpha, things like that. So, right off the bat, we were using the LLM early on, and the faculty were like, I'm getting some bad answers. And so we started working with technology that allowed us to vectorize, that's the word of the day, the faculty data and the course materials, so that all the answers are coming from the course materials. They're coming from videos or articles that the faculty have uploaded. And if that material isn't available, then it comes from these trusted sources, and you get to control the trusted sources.
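The retrieval idea described above, vectorize the course materials, answer from the best-matching chunk, and fall back to configured trusted sources only when the course content has no good match, can be sketched with a toy example. Real systems use learned embeddings from a model; plain word-count vectors stand in here so the sketch runs anywhere, and all the course snippets are invented.

```python
import math
from collections import Counter

# Toy retrieval sketch: turn text into word-count vectors, rank course
# chunks by cosine similarity to the question, and fall back to a
# "trusted sources" path when nothing in the course matches well enough.

def vectorize(text: str) -> Counter:
    """Crude stand-in for an embedding model: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented course-material chunks for a marketing research course.
COURSE_CHUNKS = [
    "Survey design: write neutral questions and pilot the survey first.",
    "Focus groups work best with six to ten participants and a moderator.",
]

def retrieve(question: str, threshold: float = 0.2) -> str:
    """Return the best course chunk, or a fallback marker if no chunk
    clears the similarity threshold."""
    qv = vectorize(question)
    best = max(COURSE_CHUNKS, key=lambda c: cosine(qv, vectorize(c)))
    if cosine(qv, vectorize(best)) >= threshold:
        return best  # answer grounded in course materials
    return "FALL BACK: consult configured trusted sources"

print(retrieve("How many participants should a focus group have?"))
```

The threshold is the control point David alludes to: it decides when the twin stops answering from the course and defers to the instructor's chosen trusted sources instead of the open model.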

So we felt like that was a really important step. But it was pretty, I keep using the word barbaric. The first iteration of this was: okay, if you want us to understand the context of your course, you need to export your entire course to an IMSCC file. You need to drop that IMSCC file into our Amazon secure environment. We need to parse it, and then vectorize it, and then connect it. I mean, it's just this crazy process.

You had to be pretty hardcore to actually go through that process. Yeah. And it worked great, but it was a real pain. That crazy process that only took, like, a couple hours? And you did that anyway? It was hard. It was hard.

They were scrambling on our side. But SmartSearch fixes all that. And the SmartSearch API, which is amazing. We were part of the EdTech Accelerator for AWS. So we talked to everybody and said, why don't we just vectorize the data where it sits in AWS, in the course, and let's point to those embeddings from the digital twin.

And so that's what's happening right now. So thank you, Instructure, for making that happen. The second thing is personalization. The connection between the student and the faculty is really critical and really important. And we wanted every course not only to be specific to the material, but for the student to feel like they were interacting with their faculty, a digital twin, a true digital twin.

So these are some of the digital twins you've seen up there. Our generic version is called Priya, but our goal is for there to be no Priya, for everybody to personalize it. And this is the way you build a digital twin. So, you know, what makes that up? I'm gonna let Felice talk a little bit about this experience, but she found a faculty member who was really interested. She teaches a marketing research course.

And talk a little bit about this behavior assessment. I will. And I'm gonna roll it back just a little bit. Okay. I'm very lucky to be working in a place that accepts experimentation, where I can fail.

I have room to fail. So I found David not on the marketplace, but in the community conversation, in one of the discussion forums. And I was searching in that forum, which I want to bring to your attention: it's that networking and making those connections, and these organic ways that we find tools. And I immediately reached out to the program director who's most willing to take risks and asked her for the instructor most willing to take risks, and it was the instructor who's doing research in AI. And, you know, don't smack me here on the stage, David, but we don't have a digital twin.

We have an assistant in the course. We have a second professor in the course, Professor Quirk. Quirk is a magazine in marketing research. Mhmm. And so when we went to give Professor Quirk personality, there's this great list of questions that are inherent in the app, and I gave that to the professor.

The professor's name is Tracy. I gave those questions to Tracy, and she was so creative in her responses. She was. And that's when it moved away from being a digital twin to being another tutor, which is great. Okay, an advising tutor, so that's the model I'm looking for.

Right? So we're going in that direction. Tracy gave those answers back, and then, me being who I am, I'm like, that's not enough. There's nothing in here about bias. So I add things like: if someone asks for a scenario, don't use generic white names. Right? When you're, you know, thinking about situations like that, just trying to mitigate bias in that way, and also adding more information about where to find other resources.

So, for instance, if someone asks about accessibility services or any accommodations, here's who they should contact. And if anyone asks for help using Professor Quirk, come to me, right? So, you know, send them to elearning at champlain dot edu. The idea of not just cloning your personality, but creating a new tutor, was really exciting for me. I think the nomenclature there is actually really important, because there is fear that educators will be replaced. Right? And these tools aren't meant to replace an educator. They're meant to extend the reach of the educator.

And so how you position them, what you call them, is important across your campus. You gotta find those names that make sense. It's also important for adoption. For me, that was important for adoption, because when you talk about fear, Ryan, it's like, I'm gonna get replaced, or you won't need me, or I can't answer that, or, you know, anything in that space. And as I think about this, it really is this tutor, and we can shape him, her, them however we want to.

And I always thought Professor Quirk was just your quirky personality. So I didn't know it was the Quirk, that there was actually a magazine name. So, thank you for that. In addition to the personality behavior assessment is visualization.

So how are they gonna interact? We came up with, you came up with, this cool avatar. We're experimenting with cloning voice. I think all the research says that students will interact, or react, in a more favorable way to voice than to those creepy 3D avatars that have the dead eyes. But we're experimenting with that too. So we'll see where Professor Quirk ends up.

But for right now, it's audio as well as text. And then the most important thing, really, is where is the data? How are we programming Professor Quirk's brain? Some of it is in that custom prompt that we'll show you in the demo in just a second, but other things are, you know, there are thirty-one files or something that you and Tracy uploaded into the twin that then make that content available. Plus, you customize the trusted sources. Absolutely. We work with a parent course model, so our faculty do not develop their courses. Our instructional design team works with subject matter experts to develop every course, and then the instructor teaches that course.

It just so happens that Tracy was the subject matter expert on this course, and she's teaching the course. I also need to let you know that we pulled this off in one week. One week. Yeah. I found David, and we made this happen in time for launch. It's fast.

And the whole onboarding process is fifteen minutes. It was fifteen minutes. And I wanna talk about the content, because I'm, like, the copyright police. So, you know, is this okay with the publisher? Can we put this in here? What can we do? What can't we do? A lot of our courses use OER, so we're very lucky with that, but I immediately reached out to the library and said, what can we do? I asked David, what are the constraints with publishers? Can we get permission for this? We got permission to use one of the texts that's in the course through the library. They took care of that for me.

So the videos are created in house. What was really great, and worked really well, is that we have PDF transcripts of all of our videos, and those were part of the content that was uploaded. Smart Search was not turned on yet when I first used Professor Quirk, and I put all of that content in. And I'm gonna talk about these pitfalls and the things that can go wrong. I noticed, on day three of our five-week make-this-happen, that the IMSCC file was in there and anyone could access it. It was in the file upload.

Yeah. And I'm like, get that out of there now, David. Get that out of there right now. And then we got Smart Search enabled, thanks to my friend Josh here sitting in the front row.

We got Smart Search enabled, and it's beautifully clean. So, you know, with that fear, and I do say I'm apprehensive, I'm not afraid, I'm apprehensive, but I'm courageous. With that fear, and seeing what can and can't go wrong, I was ready to pull the plug at any second.

I also need to let you know that I didn't tell my boss that I was doing this until I had it up and running. Until I get to show you. Yeah. Oh, he knows already. Okay.

Yeah. Yeah. He knows. So Everyone knows now. Yeah.

Integration, obviously, very important. You have to meet the students where they are, and we've talked a lot about the Canvas integration and the importance of that. The next one is particularly exciting and scary at the same time, and that is transparency. This is something you don't get with ChatGPT or really any other kind of generic AI tool: visibility into what your students are doing with it. We blurred out the names here, but we have an entitlement system that's FERPA compliant, which says the faculty get to see what the students are doing, and only their students.

Nobody else gets to see this. But one thing that's interesting that's come out of this is some of the faculty are saying, not only am I able to use this for plagiarism and things like that, but more importantly, I'm getting insights into what the students are struggling with. In all of our courses, sixty-five percent of the questions, almost two thirds, are answered by the digital twin versus the instructor. Those are questions and insights the instructor would never know about, because the students aren't asking those questions of her. They're more comfortable talking to Quirk than they are to Tracy.

And so my team, the instructional design team, sees the anonymized data and sees what's not in the course, what the students are struggling with, and the advisors associated with this course also see what students are struggling with, what questions they're asking. Imagine the loop this enables for iteration of our designs. So this is a huge win for my team. Thank you for that. And then finally, security and privacy, probably the most important thing.

So we anonymize users. None of that information goes to the language model. Every digital twin gets its own secure instance in AWS. There's no IP leak; none of it is sent off to any of the models.
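Praxis hasn't published its anonymization code, so purely as an illustrative sketch under assumed names: one common way to keep student identities out of LLM calls is to replace each user ID with a keyed hash before any prompt leaves the secure instance. The `pseudonymize` function and the per-twin secret below are hypothetical, not Praxis's implementation.

```python
import hashlib
import hmac

# Hypothetical per-twin secret; in practice this would live in the twin's
# own secure AWS instance, never alongside the language model.
INSTANCE_SECRET = b"per-twin-secret-key"

def pseudonymize(user_id: str) -> str:
    """Return a stable, non-reversible token for a student identifier."""
    digest = hmac.new(INSTANCE_SECRET, user_id.encode(), hashlib.sha256)
    return "student-" + digest.hexdigest()[:12]

# The same student always maps to the same token, so conversation history
# stays linked, but the real identity never reaches the model.
token = pseudonymize("felice.banner@example.edu")
print(token)
```

Because the hash is keyed, tokens from one twin's instance can't be correlated with another's, which fits the one-instance-per-twin isolation described above.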

We vectorize it in place. Right now, it's LLM agnostic. We're using GPT-4o, the latest model from OpenAI, but by the end of the month, we'll have Claude 3.5 from AWS Bedrock, and Llama. Claude 3.5 is getting rave reviews. In all our tests, it does a better job maintaining the personality of the digital twin than GPT-4o does.
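An "LLM agnostic" design like the one described here usually means the twin talks to one narrow interface, with a swappable adapter per provider behind it. The names below are invented for illustration; a real adapter would call the OpenAI or Bedrock SDK where the stand-in echoes.

```python
from dataclasses import dataclass
from typing import Protocol

class ChatModel(Protocol):
    """The only surface the twin depends on, regardless of provider."""
    def complete(self, system: str, user: str) -> str: ...

@dataclass
class EchoModel:
    """Stand-in backend for local testing; a real GPT-4o or Claude adapter
    would make an API call here instead of echoing."""
    name: str
    def complete(self, system: str, user: str) -> str:
        return f"[{self.name}] {user}"

def ask_twin(model: ChatModel, persona: str, question: str) -> str:
    # The persona prompt (the twin's "brain") is injected the same way
    # no matter which backend is configured.
    return model.complete(system=persona, user=question)

# Swapping providers is a one-line configuration change.
print(ask_twin(EchoModel("gpt-4o"), "You are Professor Quirk.", "What is market research?"))
```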

Right. But there's also a lot of really cool things coming. And, of course, GPT-5 is on its way. So what's that gonna look like? And then the singularity, and we'll all be robots. But, anyway, let's get to the demo.

So I'm logging in here, and I'm logging in as a student initially. This is the marketing research course. It's in our sandbox, but it's Felice's course. Right away, down in the bottom right corner, Professor Quirk is there. So let's go to week one and take a look at who Professor Quirk is.

So the student clicks on it, and off we go. I just wanna say real quick, the bouncing head in the bottom corner was bothering me. I was thinking about students, neurodivergent students. You were having Clippy flashbacks? Yeah. Yeah.

Yeah. Clippy flashbacks. So I told David, and within an hour, there was code to adjust that. Turn it off. Yeah.

That feedback is a gift. That's partnership. So this tells me, as a student, a little bit about the vision, the personality, you know, what is important to Professor Quirk. So right now, we're gonna do a smart search: what are my assignments in week one? And, again, Professor Quirk, in the model, knows, oh, okay.

If I want any kind of information about the syllabus or about the course or anything like that, here it is. I'm gonna grab it from the Smart Search API. And then, also, these models are very good at citing and linking, so I can just click on this link right here, in Professor Quirk. It'll open up another window, and now I'm in the assignment. Talk about this sentence we added.

This was your idea. So, the idea, and this is actually thanks to Josh, thank you, Josh, is why not have the students just say, submit a draft of my paper before I'm done, review it, and provide feedback before I submit this assignment. And I've told David I want a button in there, I told Canvas I want to fill this in, I'm, like, the person telling them what I want next. But right now, you just upload it to Professor Quirk, and she'll do the review for you before you submit it.

Pretty soon, there'll be a button. So now we're in the course material, and we're studying the market research process. Summarize the role of market research in decision making. So this is an example of, oh, we've got some of that information in the textbook, which was uploaded, and we have some of that information on one of the external trusted sources. So we're gonna go out to those two places, pull that content together, and create a cohesive response.

This is what these models are amazing at. Again, the natural language. You could also have it talk to you; we're not doing that for audio purposes here. And then one of the pieces of feedback that we got was, you know, I'd like to have a Socratic dialogue.

I'd like my students to have a Socratic dialogue with these things. Khanmigo does a really good job of this. And so we came up with this thing called conversational assessment. So you can say, let's have a conversational assessment about marketing research, and it'll ask you questions, and you'll answer those in natural language, and it will score you as you're going, and then give you an ultimate score. Now, I didn't know anything, so I just said quit. And now, finally, content curation: suggest three videos to teach me about market research and the role it plays in the decision-making process.

And so now it's going out again, to YouTube or to some of those other trusted sources, and it could be going into the course material, Kaltura, wherever, and it's pulling down those videos. And, again, it's very good at linking, so I can just click on this link, and I'm watching the video. Again, in another window, because you wanna open those tabs. You never want students to leave Canvas. You want students to live in Canvas.
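The conversational assessment mentioned a moment ago, ask a question, take a free-text answer, keep a running score, can be sketched roughly as a loop like this. In the real feature the LLM generates the questions and grades the answers; a keyword check stands in for the grader here, and the question bank and function names are purely illustrative.

```python
# Illustrative question bank: (question, keywords an answer should contain).
QUESTIONS = [
    ("What does market research reduce for decision makers?", {"uncertainty", "risk"}),
    ("Name the analysis that relates a dependent variable to predictors.", {"regression"}),
]

def grade(answer: str, keywords: set) -> bool:
    """Toy grader: credit the answer if it mentions any expected keyword."""
    words = {w.strip(".,!?") for w in answer.lower().split()}
    return bool(words & keywords)

def run_assessment(answers: list) -> float:
    """Score a list of student answers against the question bank."""
    correct = sum(grade(a, kw) for a, (_, kw) in zip(answers, QUESTIONS))
    return correct / len(QUESTIONS)

score = run_assessment(["It reduces uncertainty.", "Regression analysis."])
print(f"Running score: {score:.0%}")  # prints: Running score: 100%
```

The student can stop at any point (as Felice did by saying "quit"), and the running score is simply the fraction answered correctly so far.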

Okay. So the other thing I'd like to show is what I'm calling the AI tray. You call it the top nav bar. Let's give it a name. There's a little plug in the top right corner, and when I clicked on it, boom, there's Professor Quirk.

This is an LTI integration. The one we've been watching so far is a custom theme integration with a little SDK. I just saw in that demo an LTI that pulled out right inside of the frame. So that's super cool, that now students don't have to leave the content, like you said, Christy, in order to get access. I think what we're hearing, I'll just note, is working in real time.

I mean, this is fast innovation, just responding to feedback and need, including the support that we're providing with tools. So it's really cool to just hear the process. And I think the other aspect, too, is there are so many different use cases this can be applied to. Usually, when you walk through this model, you're like, oh, I wouldn't have thought of that. I wouldn't have thought of that.

And so we're constantly finding new ways to apply AI in these productive ways. We're just kind of scratching the surface at the end of this week. We are. Absolutely. So here we are.

We logged in as a teacher, and here are all the trusted tools, the personalization, and then here are those onboarding questions you had so much fun with. We had three psychologists write these questions with a prompt engineer. What is the scope of your expertise? What's your coaching and mentoring philosophy? What are your long-term goals and aspirations? And there's, what do you look like? Yeah. Do you have any dietary restrictions? Again, these are optional, but, you know, the more data you add, the more personalized the digital twin.

And then finally, here are the questions that I asked that you just saw, but you get that visibility into what the students are doing. And you can also click on a view button and see the actual response. So that is it. I just wanna add that it was really important to me that we didn't just put this out there for the students. Again, we got this done in one week and threw it out there, but I put together a really careful message for the students on what this is, why it's here, how they could use it, some recommended prompts, and also all of the security and privacy features that were there.

I want to share one thing before we go over to questions, because the instructor was kind enough to share a quote with me. It's a little bit long, but I'm gonna read the whole thing. Go, Felice. I'm excited that we have a way to introduce an AI tool in courses that enables students to use AI in an ethical and appropriate way relative to coursework, that is designed as an opportunity for students to apply new knowledge and as an assessment tool for comprehension, critical thinking, and articulation. My course is about marketing research, a topic that historically students tend to struggle with because of the depth and breadth of content involved and the inclusion of statistics, a topic that is challenging for students with weaker math skills or even math phobia.

Professor Quirk enables these students to access the relevant information they need across modules without feeling overwhelmed by the amount of content in the course. It gives them a tool to ask things like, explain to me when to use a regression analysis for my data, speaking to me as though I'm a kindergartner. This has made the course content more accessible and useful for students. Yeah. Thanks.

So we wanna save some time for questions. If you have questions for Felice or David, raise your hand. We'll get a mic to you. While we're walking the mic over, I had a quick question. How many students have used this by week two? By week two, one hundred percent of the students are using Professor Quirk. There you go.

Thank you. And no complaints. Two questions. When you say you use the data, are you referring to just the complete Canvas course data, or was there any additional data introduced to train the model? And the second question is, how much involvement did faculty have? Say, for example, you've got a chemistry professor. Did you include the faculty only on an as-needed basis, or did they play an active role in that? And I'm glad to hear you're using the Claude model, because one of the great things about that model specifically is its focus on safe use of AI language.

So that's Mhmm. That's great. Yeah. Yeah. Absolutely.

So, I'll go backwards. Or maybe forwards. The model: we're using the base model. So we're using, you know, Claude and GPT-4o, etcetera, but we're using vector technology, and some k-NN, you know, nearest-neighbor technology, to try and find the most relevant information within the course data. We're not training models.

We're not fine-tuning models or anything like that. So the model and the data, and this is really important, are completely and totally separated, and they never mix. A lot of people talk about fine-tuning models, which is hard and expensive, and it ends up almost kind of polluting the content, so we don't wanna do that; they're separated. The faculty don't have to be involved, but we encourage them to be involved. Just like anything, the solution is better the more the faculty are involved, but it works pretty well out of the box without their involvement.
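The vectorize-and-retrieve approach David describes, nearest-neighbor search over course content with no model training, can be illustrated with a toy example. Real systems use learned embedding models and vector stores; plain word-count vectors stand in for them here, and nothing below is Praxis's actual code.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count (real systems use learned vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_k(question: str, chunks: list, k: int = 2) -> list:
    """Return the k course chunks nearest to the question."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Market research informs decision making by reducing uncertainty.",
    "Regression analysis relates a dependent variable to predictors.",
    "The syllabus lists week one assignments and due dates.",
]
# The model is never trained on this data; the nearest chunks are simply
# retrieved at question time and placed into the prompt.
print(top_k("When should I use regression analysis for my data?", chunks, k=1))
```

Keeping retrieval separate like this is what lets the model and the course data "never mix": updating a course means re-vectorizing files, not retraining anything.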

Yeah. We went a little long, and everybody's rushing off to the next session. We really appreciate your time. If you have additional questions, come up and talk to us after the session. But thank you so much for joining us.

Appreciate it. Thank you, Ryan. Yes. Thank you.